The beauty of physics is that there are usually conserved quantities in otherwise changing systems, known as constants of motion. Finding constants of motion is important for understanding the dynamics of a system, but it typically requires mathematical proficiency and manual analytical work. In this paper, we present a neural network that can simultaneously learn a system's dynamics and its constants of motion from data. By exploiting the discovered constants of motion, it can produce better predictions of the dynamics and can work on a wider range of systems than Hamiltonian-based neural networks. In addition, the training progress of our method can serve as an indicator of the number of constants of motion in the system, which can be useful for studying novel physical systems.
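To make the idea concrete, below is a minimal PyTorch sketch of jointly learning the dynamics f(x) and candidate constants of motion c(x) by penalizing the directional derivative of c along f. The architecture, the loss weighting, and the omission of safeguards against trivial solutions are assumptions of this sketch, not the paper's exact formulation.

```python
# Sketch only: joint learning of dynamics and constants of motion.
import torch
import torch.nn as nn

class DynamicsWithConstants(nn.Module):
    def __init__(self, state_dim, n_constants, hidden=64):
        super().__init__()
        # f(x): learned vector field, dx/dt = f(x)
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim))
        # c(x): candidate constants of motion
        self.constants = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_constants))

def loss_fn(model, x, dxdt, lam=1.0):
    x = x.detach().clone().requires_grad_(True)
    f = model.dynamics(x)
    fit = ((f - dxdt) ** 2).mean()          # match observed time derivatives
    c = model.constants(x)
    # if c is conserved, dc/dt = (grad_x c) . f must vanish along trajectories
    cons = 0.0
    for i in range(c.shape[1]):
        grad_c = torch.autograd.grad(c[:, i].sum(), x, create_graph=True)[0]
        cons = cons + ((grad_c * f).sum(dim=1) ** 2).mean()
    # NOTE: a real implementation must also rule out trivial solutions
    # such as constant c(x), e.g. via normalization constraints.
    return fit + lam * cons
```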
Energy conservation is at the heart of many physical phenomena and dynamical systems. Over the past few years, a large body of work has aimed to predict the motion trajectories of dynamical systems using neural networks while respecting the law of energy conservation. Most of these works are inspired by classical mechanics, such as Hamiltonian and Lagrangian mechanics, as well as neural ordinary differential equations. While these works have been shown to work well in their respective domains, a unified approach that is generally applicable without major changes to the neural network architecture has been lacking. In this work, we aim to address this issue with a simple approach that can be applied not only to energy-conserving systems but also to dissipative systems, by including different inductive biases in the form of regularization terms in the loss function for the different cases. The proposed method does not require changing the neural network architecture and can form a basis for validating new ideas, and therefore shows promise for accelerating research in this direction.
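As an illustration of such a regularization-only inductive bias, here is a hedged sketch in which an energy term is added to a plain trajectory-prediction loss; the toy energy function and the weight lam are placeholder assumptions for a concrete system.

```python
# Sketch only: inductive bias as a loss regularizer, no architecture change.
import torch

def energy(state):
    # placeholder energy for a unit-mass harmonic oscillator, state = (q, p)
    q, p = state[..., 0], state[..., 1]
    return 0.5 * p ** 2 + 0.5 * q ** 2

def regularized_loss(pred_traj, true_traj, lam=0.1, dissipative=False):
    # pred_traj, true_traj: (batch, time, state_dim)
    mse = ((pred_traj - true_traj) ** 2).mean()
    e = energy(pred_traj)                   # energy along predicted trajectory
    de = e[:, 1:] - e[:, :-1]               # change between consecutive steps
    if dissipative:
        reg = torch.relu(de).mean()         # energy may only decrease
    else:
        reg = (de ** 2).mean()              # energy must stay constant
    return mse + lam * reg
```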
Deep-learning-based models are increasingly used to emulate scientific simulations and accelerate scientific research. However, accurate supervised deep learning models require large amounts of labeled data, and labeling often becomes the bottleneck in adopting neural networks. In this work, we leverage an active learning approach called core-set selection to actively select data for labeling and training under a predefined budget. To further improve model performance and reduce training cost, we also warm-start training using the shrink-and-perturb trick. We test both in two scenarios from different domains: the galaxy halo occupation distribution model in astrophysics and X-ray emission spectroscopy in plasma physics, and the results are promising: compared with a random-sampling baseline, we achieve competitive overall performance and, more importantly, successfully reduce the larger absolute losses, i.e., the long tail of the loss distribution, at almost no overhead cost.
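A minimal sketch of the two ingredients, greedy k-center core-set selection and the shrink-and-perturb warm start, is given below; the feature space used for distances and the shrink/noise coefficients are illustrative assumptions.

```python
# Sketch only: core-set (k-center) selection and shrink-and-perturb warm start.
import numpy as np
import torch

def coreset_select(features, labeled_idx, budget):
    """Greedy k-center: repeatedly pick the unlabeled point farthest from
    the current labeled set in feature space. labeled_idx must be non-empty."""
    dist = np.min(
        np.linalg.norm(features[:, None] - features[labeled_idx][None], axis=-1),
        axis=1)
    chosen = []
    for _ in range(budget):
        i = int(np.argmax(dist))
        chosen.append(i)
        dist = np.minimum(dist, np.linalg.norm(features - features[i], axis=-1))
    return chosen

def shrink_and_perturb(model, shrink=0.5, sigma=0.01):
    """Warm start: shrink existing weights toward zero and add small noise."""
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(shrink).add_(sigma * torch.randn_like(p))
    return model
```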
Neural Radiance Fields (NeRF) methods have proved effective as compact, high-quality and versatile representations for 3D scenes, and enable downstream tasks such as editing, retrieval, navigation, etc. Various neural architectures are vying for the core structure of NeRF, including the plain Multi-Layer Perceptron (MLP), sparse tensors, low-rank tensors, hashtables and their compositions. Each of these representations has its particular set of trade-offs. For example, the hashtable-based representations admit faster training and rendering but their lack of clear geometric meaning hampers downstream tasks like spatial-relation-aware editing. In this paper, we propose Progressive Volume Distillation (PVD), a systematic distillation method that allows any-to-any conversions between different architectures, including MLP, sparse or low-rank tensors, hashtables and their compositions. PVD consequently empowers downstream applications to optimally adapt the neural representations for the task at hand in a post hoc fashion. The conversions are fast, as distillation is progressively performed on different levels of volume representations, from shallower to deeper. We also employ special treatment of density to deal with its specific numerical instability problem. Empirical evidence is presented to validate our method on the NeRF-Synthetic, LLFF and TanksAndTemples datasets. For example, with PVD, an MLP-based NeRF model can be distilled from a hashtable-based Instant-NGP model at a 10X~20X faster speed than training the original NeRF from scratch, while achieving a superior level of synthesis quality. Code is available at https://github.com/megvii-research/AAAI2023-PVD.
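As a rough illustration of representation-to-representation distillation (not the actual PVD procedure, which works progressively over volume levels), one could match a student's density and color to a frozen teacher on sampled points, with density compared in log space as one plausible way to handle its numerical instability:

```python
# Sketch only: distilling one radiance-field representation into another.
# The teacher/student interfaces and the log-space density comparison are
# assumptions; see the paper's code for the actual procedure.
import torch

def distill_step(teacher, student, optimizer, n_pts=4096):
    pts = torch.rand(n_pts, 3) * 2 - 1          # random points in the scene box
    dirs = torch.nn.functional.normalize(torch.randn(n_pts, 3), dim=-1)
    with torch.no_grad():
        sigma_t, color_t = teacher(pts, dirs)   # e.g. a trained Instant-NGP
    sigma_s, color_s = student(pts, dirs)       # e.g. a plain MLP NeRF
    # match density in log space to tame its numerical instability
    loss = (torch.log1p(sigma_s) - torch.log1p(sigma_t)).pow(2).mean() \
         + (color_s - color_t).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```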
Several works have proven that finetuning is an applicable approach for debiasing contextualized word embeddings. Similarly, discrete prompts with semantic meanings have been shown to be effective in debiasing tasks. With unfixed mathematical representation at the token level, continuous prompts usually surpass discrete ones at providing a pre-trained language model (PLM) with additional task-specific information. Despite this, relatively few efforts have been made to debias PLMs by prompt tuning with continuous prompts compared to their discrete counterpart. Furthermore, for most debiasing methods that alter a PLM's original parameters, a major problem is the need to not only decrease the bias in the PLM but also to ensure that the PLM does not lose its representation ability. Finetuning methods typically have a hard time maintaining this balance, as they tend to violently remove meanings of attribute words. In this paper, we propose ADEPT, a method to debias PLMs using prompt tuning while maintaining the delicate balance between removing biases and ensuring representation ability. To achieve this, we propose a new training criterion inspired by manifold learning and equip it with an explicit debiasing term to optimize prompt tuning. In addition, we conduct several experiments with regard to the reliability, quality, and quantity of a previously proposed attribute training corpus in order to obtain a clearer prototype of a certain attribute, which indicates the attribute's position and relative distances to other words on the manifold. We evaluate ADEPT on several widely acknowledged debiasing benchmarks and downstream tasks, and find that it achieves competitive results while maintaining (and in some cases even improving) the PLM's representation ability. We further visualize words' correlation before and after debiasing a PLM, and give some possible explanations for the visible effects.
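A hedged sketch of the overall setup, a frozen PLM with trainable continuous prompts and an explicit debiasing term, is given below; the PLM interface and the simple mean-distance debias loss are stand-in assumptions, not ADEPT's manifold-based criterion.

```python
# Sketch only: continuous prompt tuning with an explicit debiasing term.
import torch
import torch.nn as nn

class PromptTuner(nn.Module):
    def __init__(self, plm, n_prompt, dim):
        super().__init__()
        self.plm = plm.eval()                    # frozen pretrained LM
        for p in self.plm.parameters():
            p.requires_grad_(False)
        self.prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)

    def encode(self, token_embeds):
        # prepend the trainable continuous prompt to the input embeddings;
        # token_embeds: (batch, seq_len, dim)
        batch = token_embeds.shape[0]
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.plm(torch.cat([prompt, token_embeds], dim=1))

def debias_loss(emb_group_a, emb_group_b):
    # push representations of the two attribute groups (e.g. gendered word
    # contexts) toward the same mean while the PLM itself stays untouched
    return (emb_group_a.mean(0) - emb_group_b.mean(0)).pow(2).sum()
```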
In this paper, federated learning (FL) over wireless networks is studied. In each communication round, a subset of devices is selected to participate in aggregation under limited time and energy. To minimize the convergence time, global loss and latency are jointly considered in a Stackelberg-game-based framework. Specifically, at the leader level, age-of-information (AoI)-based device selection is formulated as a global loss minimization problem, while subchannel assignment, computational resource allocation, and power allocation are treated as a latency minimization problem at the follower level. By decomposing the follower-level problem into two subproblems, the followers' best response is obtained via a monotonic-optimization-based resource allocation algorithm and a matching-based subchannel assignment algorithm. By deriving an upper bound on the convergence rate, the leader-level problem is reformulated, and a list-based device selection algorithm is then proposed to reach the Stackelberg equilibrium. Simulation results show that the proposed device selection scheme outperforms other schemes in terms of global loss, and the developed algorithms can significantly reduce the time consumed by computation and communication.
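As a toy illustration of the leader-level idea (not the paper's list-based algorithm), AoI-based device selection for one round might look like this, where the devices with the largest age of information whose estimated latency fits the round deadline are chosen:

```python
# Sketch only: AoI-based device selection for one FL round.
def select_devices(aoi, latency, deadline, k):
    """aoi[i]: rounds since device i last participated; latency[i]: its
    estimated compute + communication time for this round."""
    eligible = [i for i in range(len(aoi)) if latency[i] <= deadline]
    ranked = sorted(eligible, key=lambda i: aoi[i], reverse=True)
    return ranked[:k]
```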
This paper studies a new online learning problem with doubly-streaming data, where the data streams are described by an evolving feature space, with new features emerging and old features gradually vanishing. The challenge of this problem is two-fold: 1) data samples ceaselessly flow in and may carry shifting patterns over time, so the learner must be updated on the fly; 2) newly emerging features are described by very few samples, leading to weak learners that tend to make error-prone predictions. A plausible idea to overcome these challenges is to establish a relationship between the preceding and evolving feature spaces, so that the online learner can leverage the knowledge learned from the old features to improve learning performance on the new features. Unfortunately, this idea does not scale up to high-dimensional media streams with complex feature interplay, which suffers from a trade-off between onlineness (biasing shallow learners) and expressiveness (requiring deep learners). Motivated by this, we propose a novel OLD^3S paradigm, where a shared latent subspace is discovered to summarize the information from the old feature space, building an intermediate feature-mapping relationship. A key trait of OLD^3S is to treat the model capacity as a learnable semantics, yielding optimal model depth and parameters jointly, in accordance with the complexity and non-linearity of the input data stream in an online fashion. Both theoretical analysis and empirical studies substantiate the viability and effectiveness of our proposal.
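The following is a hedged sketch of the core mechanism: during the overlap period when both feature spaces are observed, a shared latent code is learned that reconstructs the old features and predicts the new ones, so that old-feature knowledge can transfer; the architecture and losses are illustrative assumptions, and the learnable-depth machinery of OLD^3S is omitted.

```python
# Sketch only: shared latent subspace bridging old and new feature spaces.
import torch
import torch.nn as nn

class SharedLatentMap(nn.Module):
    def __init__(self, old_dim, new_dim, latent_dim=32):
        super().__init__()
        self.enc_old = nn.Linear(old_dim, latent_dim)
        self.dec_old = nn.Linear(latent_dim, old_dim)
        self.dec_new = nn.Linear(latent_dim, new_dim)

    def forward(self, x_old):
        z = torch.tanh(self.enc_old(x_old))
        return self.dec_old(z), self.dec_new(z)

def overlap_loss(model, x_old, x_new):
    # reconstruct the old features and predict the new ones from the same code
    rec_old, rec_new = model(x_old)
    return ((rec_old - x_old) ** 2).mean() + ((rec_new - x_new) ** 2).mean()
```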
Claim detection and verification are crucial for news understanding and have emerged as promising technologies for mitigating misinformation in news. However, most existing work has focused on analyzing the claim sentences while overlooking crucial background attributes, such as the claimer, the claim object, and other knowledge connected to the claim. In this work, we present NewsClaims, a new benchmark for knowledge-aware claim detection in the news domain. We redefine the claim detection problem to include extraction of additional background attributes related to the claim, and release 529 claims annotated over 103 news articles. In addition, NewsClaims aims to benchmark claim detection systems in emerging scenarios, involving unseen topics with little or no training data. Finally, we provide a comprehensive evaluation of various zero-shot and prompt-based baselines on this new benchmark.
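As a toy illustration of the kind of zero-shot, prompt-based baseline such a benchmark evaluates, one might query an instruction-following LM with a template like the one below; the template wording and the generate function are hypothetical.

```python
# Sketch only: a zero-shot prompt template for claim detection with attributes.
def claim_detection_prompt(sentence):
    return (
        "Does the following sentence make a claim about a debated topic? "
        "If so, who is the claimer and what is the claim object?\n"
        f"Sentence: {sentence}\nAnswer:")

# response = generate(claim_detection_prompt(news_sentence))  # any LM backend
```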
Conventional approaches to identifying depression do not scale, and public awareness of mental health is limited, especially in developing countries. As evident from recent studies, social media has the potential to complement mental health screening on a much greater scale. A vast number of first-person narrative posts in chronological order can provide insights into one's thoughts, feelings, behaviors, or moods over a period of time, enabling a better understanding of depression symptoms reflected in the online space. In this paper, we propose SERCNN, which improves the user representation by (1) stacking two pretrained embeddings from different domains and (2) reintroducing the embedding context to the MLP classifier. Our SERCNN outperforms state-of-the-art and other baselines, achieving 93.7% accuracy in a 5-fold cross-validation setting. Since not all users share the same level of online activity, we introduce the concept of a fixed observation window, which quantifies the observation period in a predefined number of posts. SERCNN performs remarkably well, with accuracy comparable to the BERT model while having 98% fewer parameters. Our findings open up a promising direction for detecting depression on social media from a smaller number of posts at inference time, toward a cost-effective and timely intervention solution. We hope our work can bring this research area closer to real-world adoption in existing clinical practice.
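A hedged sketch of the stacked-embedding idea is shown below: two pretrained embeddings are concatenated per token and fed to a small 1-D CNN classifier; the embedding sources, dimensions, and pooling choice are assumptions of this sketch.

```python
# Sketch only: stacking two pretrained embeddings for a CNN text classifier.
import torch
import torch.nn as nn

class StackedEmbeddingCNN(nn.Module):
    def __init__(self, dim_a=300, dim_b=300, n_filters=100, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(dim_a + dim_b, n_filters, kernel, padding=1)
        self.clf = nn.Sequential(nn.ReLU(), nn.Linear(n_filters, 2))

    def forward(self, emb_a, emb_b):
        # emb_a, emb_b: (batch, seq_len, dim) token embeddings from two
        # different pretrained sources (e.g. general vs. social-media corpora)
        x = torch.cat([emb_a, emb_b], dim=-1).transpose(1, 2)
        h = torch.relu(self.conv(x)).max(dim=2).values   # max-pool over time
        return self.clf(h)
```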
In data containing heterogeneous subpopulations, classification performance benefits from incorporating the knowledge of cluster structure in the classifier. Previous methods for such combined clustering and classification either 1) are classifier-specific and not generic, or 2) independently perform clustering and classifier training, which may not form clusters that can potentially benefit classifier performance. The question of how to perform clustering to improve the performance of classifiers trained on the clusters has received scant attention in previous literature, despite its importance in several real-world applications. In this paper, first, we theoretically analyze the generalization performance of classifiers trained on clustered data and find conditions under which clustering can potentially aid classification. This motivates the design of a simple k-means-based classification algorithm called Clustering Aware Classification (CAC) and its neural variant DeepCAC. DeepCAC effectively leverages deep representation learning to learn latent embeddings and finds clusters in a manner that makes the clustered data suitable for training classifiers for each underlying subpopulation. Our experiments on synthetic and real benchmark datasets demonstrate the efficacy of DeepCAC over previous methods for combined clustering and classification.
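For reference, the generic cluster-then-classify recipe that CAC builds on can be sketched as follows; this is the plain baseline version (independent clustering, then per-cluster classifiers), not CAC's cluster-aware objective or DeepCAC's learned embedding.

```python
# Sketch only: k-means clustering followed by per-cluster classifiers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit(X, y, k=3):
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    # assumes every cluster contains samples of both classes
    clfs = [LogisticRegression(max_iter=1000)
            .fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(k)]
    return km, clfs

def predict(km, clfs, X):
    # route each test point to the classifier of its nearest centroid
    labels = km.predict(X)
    return np.array([clfs[c].predict(x[None])[0] for c, x in zip(labels, X)])
```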